Atlas-powered deep learning (ADL) -- application to diffusion weighted MRI
Deep learning has great potential for estimating biomarkers in
diffusion-weighted magnetic resonance imaging (dMRI). Atlases, on the other hand, are a
unique tool for modeling the spatio-temporal variability of biomarkers. In this
paper, we propose the first framework to exploit both deep learning and atlases
for biomarker estimation in dMRI. Our framework relies on non-linear diffusion
tensor registration to compute biomarker atlases and to estimate atlas
reliability maps. We also use nonlinear tensor registration to align the atlas
to a subject and to estimate the error of this alignment. We use the biomarker
atlas, atlas reliability map, and alignment error map, in addition to the dMRI
signal, as inputs to a deep learning model for biomarker estimation. We use our
framework to estimate fractional anisotropy and neurite orientation dispersion
from down-sampled dMRI data on a test cohort of 70 newborn subjects. Results
show that our method significantly outperforms standard estimation methods as
well as recent deep learning techniques. Our method is also more robust to
stronger measurement down-sampling factors. Our study shows that the advantages
of deep learning and atlases can be synergistically combined to achieve
unprecedented accuracy in biomarker estimation from dMRI data.
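The fractional anisotropy that the framework estimates is a standard scalar derived from the eigenvalues of the diffusion tensor. A minimal sketch of that textbook formula (the function name is ours, not the paper's):

```python
import numpy as np

def fractional_anisotropy(evals):
    """Fractional anisotropy (FA) from the three diffusion tensor
    eigenvalues, using the standard normalized-variance formula.
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, where MD is the
    mean diffusivity. FA is 0 for isotropic diffusion, 1 for a
    perfectly anisotropic (single-direction) tensor."""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean(axis=-1, keepdims=True)          # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum(axis=-1))  # deviation from isotropy
    den = np.sqrt((evals ** 2).sum(axis=-1))         # tensor magnitude
    return np.sqrt(1.5) * num / np.maximum(den, 1e-12)
```

In practice such maps are computed voxel-wise over the whole tensor volume; the sketch above applies equally to an `(..., 3)` array of eigenvalues.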
Improving Calibration and Out-of-Distribution Detection in Medical Image Segmentation with Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image
segmentation models. In this study, we address some of the main unresolved
issues regarding these models. Specifically, training of these models on small
medical image datasets is still challenging, with many studies promoting
techniques such as transfer learning. Moreover, these models are infamous for
producing over-confident predictions and for failing silently when presented
with out-of-distribution (OOD) data at test time. In this paper, we advocate
for multi-task learning, i.e., training a single model on several different
datasets, spanning several different organs of interest and different imaging
modalities. We show not only that a single CNN learns to automatically
recognize the context and accurately segment the organ of interest in each
context, but also that such a joint model often has more accurate and
better-calibrated predictions than dedicated models trained separately on each
dataset. Our experiments show that multi-task learning can outperform transfer
learning in medical image segmentation tasks. For detecting OOD data, we
propose a method based on spectral analysis of CNN feature maps. We show that
different datasets, representing different imaging modalities and/or different
organs of interest, have distinct spectral signatures, which can be used to
identify whether or not a test image is similar to the images used to train a
model. We show that this approach is far more accurate than OOD detection based
on prediction uncertainty. The methods proposed in this paper contribute
significantly to improving the accuracy and reliability of CNN-based medical
image segmentation models.
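The abstract does not spell out the spectral analysis, but the idea can be sketched as summarizing a feature map by its radially averaged power spectrum and scoring a test image by its distance from the training signature. All names and the choice of distance below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def spectral_signature(feature_map):
    """Radially averaged log power spectrum of a 2D CNN feature map.
    Illustrative sketch: different datasets/modalities are assumed to
    produce distinct 1D spectral profiles."""
    f = np.fft.fftshift(np.fft.fft2(feature_map))
    power = np.log1p(np.abs(f) ** 2)
    h, w = power.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)  # radius per pixel
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return sums / counts  # mean log-power at each radius

def ood_score(test_profile, train_profile):
    """Distance between spectral profiles; higher = more likely OOD."""
    n = min(len(test_profile), len(train_profile))
    return float(np.linalg.norm(test_profile[:n] - train_profile[:n]))
```

In a real pipeline the training signature would be an average over many in-distribution feature maps, with a threshold on the score calibrated on held-out data.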
TBSS++: A novel computational method for Tract-Based Spatial Statistics
Diffusion-weighted magnetic resonance imaging (dMRI) is widely used to assess
the brain's white matter. One of the most common computations in dMRI involves
cross-subject tract-specific analysis, whereby dMRI-derived biomarkers are
compared between cohorts of subjects. The accuracy and reliability of these
studies hinge on the ability to precisely compare the same white matter tracts
across subjects. This is an intricate and error-prone computation. Existing
computational methods such as Tract-Based Spatial Statistics (TBSS) suffer from
a host of shortcomings and limitations that can seriously undermine the
validity of the results. We present a new computational framework that
overcomes the limitations of existing methods via (i) accurate segmentation of
the tracts, and (ii) precise registration of data from different
subjects/scans. The registration is based on fiber orientation distributions.
To further improve the alignment of cross-subject data, we create detailed
atlases of white matter tracts. These atlases serve as an unbiased reference
space where the data from all subjects is registered for comparison. Extensive
evaluations show that, compared with TBSS, our proposed framework offers
significantly higher reproducibility and robustness to data perturbations. Our
method promises a drastic improvement in accuracy and reproducibility of
cross-subject dMRI studies that are routinely used in neuroscience and medical
research.
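Once biomarker maps from all subjects are registered to the atlas space, a cross-subject tract-specific comparison reduces to a per-tract group test. A minimal sketch using Welch's t statistic (the data layout and function name are our assumptions for illustration, not part of TBSS++):

```python
import numpy as np

def tract_group_ttest(group_a, group_b):
    """Welch's t statistic per tract for atlas-aligned biomarker values
    (e.g., mean FA per tract). Hypothetical layout: rows = subjects,
    columns = white matter tracts."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    mean_diff = a.mean(axis=0) - b.mean(axis=0)
    # Welch's unpooled variance of the mean difference
    var_a = a.var(axis=0, ddof=1) / len(a)
    var_b = b.var(axis=0, ddof=1) / len(b)
    return mean_diff / np.sqrt(var_a + var_b)
```

A full analysis would convert these statistics to p-values and correct for multiple comparisons across tracts; the sketch stops at the test statistic.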
Fetal-BET: Brain Extraction Tool for Fetal MRI
Fetal brain extraction is a necessary first step in most computational fetal
brain MRI pipelines. However, it has been a very challenging task due to
non-standard fetal head pose, fetal movements during examination, and vastly
heterogeneous appearance of the developing fetal brain and the neighboring
fetal and maternal anatomy across various sequences and scanning conditions.
Development of a machine learning method to effectively address this task
requires a large and rich labeled dataset that has not been previously
available. As a result, there is currently no method for accurate fetal brain
extraction on various fetal MRI sequences. In this work, we first built a large
annotated dataset of approximately 72,000 2D fetal brain MRI images. Our
dataset covers three common MRI sequences, T2-weighted, diffusion-weighted,
and functional MRI, acquired with different scanners.
Moreover, it includes normal and pathological brains. Using this dataset, we
developed and validated deep learning methods by exploiting the power of
U-Net-style architectures, the attention mechanism, multi-contrast feature
learning, and data augmentation for fast, accurate, and generalizable automatic
fetal brain extraction. Our approach leverages the rich information from
multi-contrast (multi-sequence) fetal MRI data, enabling precise delineation of
the fetal brain structures. Evaluations on independent test data show that our
method achieves accurate brain extraction on heterogeneous test data acquired
with different scanners, on pathological brains, and at various gestational
stages. This robustness underscores the potential utility of our deep learning
model for fetal brain imaging and image analysis.
Comment: 10 pages, 6 figures, 2 tables. This work has been submitted to the
IEEE Transactions on Medical Imaging for possible publication. Copyright may
be transferred without notice, after which this version may no longer be
accessible.
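Segmentation accuracy of the kind evaluated above is conventionally measured with the Dice overlap between a predicted brain mask and the reference annotation; the abstract does not name its metric, so this choice and the function below are our assumptions:

```python
import numpy as np

def dice(pred_mask, ref_mask, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A & B| / (|A| + |B|). Ranges from 0 (no overlap) to 1
    (perfect agreement). eps guards the empty-mask case."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)
```

For 3D fetal MRI the same formula applies volume-wise; per-case Dice scores are then typically averaged across the independent test set.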